
Author Search Result

[Author] Eiji OKI (86 hits)

Results 61-80 of 86

  • ePec-LDPC HARQ: An LDPC HARQ Scheme with Targeted Retransmission

    Yumei WANG  Jiawei LIANG  Hao WANG  Eiji OKI  Lin ZHANG  

     
    PAPER-Fundamental Theories for Communications

      Publicized:
    2016/04/12
      Vol:
    E99-B No:10
      Page(s):
    2168-2178

    In 3GPP (3rd Generation Partnership Project) LTE (Long Term Evolution) systems, when HARQ (Hybrid Automatic Repeat reQuest) retransmission is invoked, the transmitter retransmits data randomly or sequentially, regardless of its relationship to the wrongly decoded data. This practice is inefficient, since precious transmission resources are spent retransmitting data that may be of no use for error correction at the receiver. This paper proposes an incremental-redundancy HARQ scheme based on Error Position Estimating Coding (ePec) and LDPC (Low-Density Parity-Check) channel coding, called ePec-LDPC HARQ. The scheme enables the receiver to feed back which code blocks within a specific MAC (Media Access Control) PDU (Protocol Data Unit) were wrongly decoded. The transmitter uses this feedback to perform targeted retransmission: only the data related to the wrongly decoded code blocks are retransmitted, which improves retransmission efficiency and reduces retransmission overhead. An enhanced incremental-redundancy LDPC coding approach, called EIR-LDPC, together with a physical-layer framing method, is developed to implement ePec-LDPC HARQ. Performance evaluations show that ePec-LDPC HARQ reduces the overall transmission resources by 15% compared to a conventional LDPC HARQ scheme; the average number of retransmissions per MAC PDU and the transmission delay are also reduced considerably.
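
    The targeted-retransmission idea can be illustrated with a minimal sketch, assuming a simple per-code-block decode flag and a caller-supplied redundancy builder; the actual ePec feedback format and the EIR-LDPC encoding are defined in the paper, and the helper names below are hypothetical.

      def receiver_feedback(decode_ok_flags):
          # Indices of wrongly decoded code blocks within one MAC PDU.
          return [i for i, ok in enumerate(decode_ok_flags) if not ok]

      def targeted_retransmission(code_blocks, failed_indices, build_redundancy):
          # Retransmit incremental redundancy only for the failed code blocks,
          # instead of retransmitting randomly or sequentially.
          return {i: build_redundancy(code_blocks[i]) for i in failed_indices}

    For example, receiver_feedback([True, False, True, False]) returns [1, 3], so only those two code blocks have redundancy rebuilt and resent.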

  • Optimum Route Design in 1+1 Protection with Network Coding for Instantaneous Recovery

    Abu Hena Al MUKTADIR  Eiji OKI  

     
    PAPER-Internet

      Vol:
    E97-B No:1
      Page(s):
    87-104

    1+1 protection provides instantaneous proactive recovery from any single link failure by duplicating the source data and sending them onto two disjoint paths. Other resource-efficient recovery techniques against single link failures require switching operations at least at both path ends, which prevents instantaneous recovery. However, 1+1 protection demands at least double the network resources. Our goal is to minimize the resources required for 1+1 protection while retaining the advantage of instantaneous recovery. It has been reported that network coding (NC) reduces resource utilization in 1+1 protection, and an Integer Quadratic Programming (IQP) formulation has already been proposed to determine an optimum NC-aware set of routes that minimizes the network resources required for 1+1 protection. Solving an IQP problem requires a large amount of memory (which cannot be determined exactly in advance) and special algorithms in the mathematical programming solver. Our contributions in this paper consist of two parts. First, we formulate the optimization problem corresponding to the IQP model as an Integer Linear Programming (ILP) problem, which is solvable by any linear programming solver and has smaller memory and time requirements. The presented ILP model works well in small- and medium-scale networks, but fails to support large-scale networks due to excessive memory requirements and calculation time. Second, to deal with these issues, we propose a heuristic algorithm that determines the best possible NC-aware set of routes in large-scale networks. Numerical results show that our strategies achieve almost double the resource savings of the conventional minimal-cost routing policy in the examined medium- and large-scale networks.
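
    As a point of reference, the baseline 1+1 route computation (without the NC sharing terms that are the paper's contribution) can be written as a small ILP over binary link variables x_{ij} and y_{ij} for the two disjoint paths of a demand from s to t with link costs c_{ij}; this is a hedged sketch of the standard disjoint-path formulation, not the paper's NC-aware model.

      \begin{aligned}
      \min \;& \textstyle\sum_{(i,j)\in E} c_{ij}\,(x_{ij}+y_{ij}) \\
      \text{s.t. } & \textstyle\sum_{j:(i,j)\in E} x_{ij}-\sum_{j:(j,i)\in E} x_{ji}=
        \begin{cases} 1 & i=s \\ -1 & i=t \\ 0 & \text{otherwise} \end{cases}
        \quad (\text{and likewise for } y), \\
      & x_{ij}+y_{ij}\le 1 \quad \forall (i,j)\in E, \qquad x_{ij},\,y_{ij}\in\{0,1\}.
      \end{aligned}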

  • Framework for PCE Based Multi-Layer Service Networks

    Mallik TATIPAMULA  Eiji OKI  Ichiro INOUE  Kohei SHIOMOTO  Zafar ALI  

     
    SURVEY PAPER-Traffic Engineering and Multi-Layer Networking

      Vol:
    E90-B No:8
      Page(s):
    1903-1911

    Implementing fast-responding multi-layer service network (MLSN) functionality allows the IP/MPLS service-network logical topology and the optical virtual network topology to be reconfigured dynamically according to the traffic pattern on the network. Direct links can be created or removed in the logical IP/MPLS service-network topology when extra capacity in the MLSN core is needed or when existing capacity is no longer required. Reconfiguring the logical and virtual network topologies constitutes a new way for Traffic Engineering (TE) to solve or avoid network congestion and service degradation. Since both the IP and optical network layers are involved, this is called multi-layer traffic engineering. We proposed a border-model-based MLSN architecture in [5]. In this paper, we define how multi-layer TE functions are realized using a Path Computation Element (PCE) for the border-model-based MLSN. We define nodal requirements for multi-layer TE and introduce requirements for the communication protocol between the PCC (Path Computation Client) and the PCE. We also present Virtual Network Topology (VNT) scenarios and the steps involved, along with examples of PCE-based VNT reconfiguration triggered by network failure, where a VNT is a topology constructed from the accumulated network resources of a lower layer.

  • Performance Evaluation of Dynamic Multi-Layer Routing Schemes in Optical IP Networks

    Eiji OKI  Kohei SHIOMOTO  Masaru KATAYAMA  Wataru IMAJUKU  Naoaki YAMANAKA  Yoshihiro TAKIGAWA  

     
    PAPER-Network

      Vol:
    E87-B No:6
      Page(s):
    1577-1583

    This paper presents two dynamic multi-layer routing policies for optical IP networks. Both policies first try to allocate a newly requested electrical path to an existing optical path that directly connects the source and destination nodes. If such a path is not available, the two policies employ different procedures. Policy 1, which has already been published, tries to find existing optical paths of two or more hops that connect the source and destination nodes. Policy 2, which is proposed in this paper, tries to establish a new one-hop optical path between the source and destination nodes. The performances of the two routing policies are evaluated. Simulation results suggest that policy 2 outperforms policy 1 if p, the number of packet-switching-capable ports, is large, while the reverse is true if p is small. We observe that p is the key factor in choosing the most appropriate routing policy.

  • Performance of Optimal Routing by Pipe, Hose, and Intermediate Models

    Eiji OKI  Ayako IWAKI  

     
    PAPER-Network

      Vol:
    E93-B No:5
      Page(s):
    1180-1189

    This paper compares the performance of optimal routing under the pipe, hose, and intermediate models. The pipe model, which is specified by the exact traffic matrix, provides the best routing performance, but the traffic matrix is difficult to measure and predict accurately. The hose model, on the other hand, is specified by just the total outgoing/incoming traffic from/to each node, but its routing performance is degraded compared to the pipe model due to the insufficient traffic information. The intermediate model, in which upper and lower bounds on the traffic demands of source-destination pairs are added as constraints, lies between the pipe and hose models: it lightens the measurement difficulty of the pipe model while narrowing the range of traffic conditions admitted by the hose model, and thus offers better routing performance than the hose model. The optimal-routing formulation extended from the pipe model to the intermediate model cannot be solved as a regular linear programming (LP) problem. Our solution, the introduction of a duality theorem, turns the problem into an LP formulation that can be solved easily. Numerical results show that the network congestion ratio of the pipe model is much lower than that of the hose model; the difference lies in the range of 27% to 45% for the various network topologies examined. The intermediate model reduces the network congestion ratio by 34% compared to the hose model in an experimental network when the upper-bound and lower-bound margins are set to 25% and 20%, respectively.
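
    For the pipe model, the congestion-ratio minimization that the comparison builds on can be written as a standard linear program; this is a hedged sketch using link capacities c_l, a given traffic matrix d_{pq}, and per-demand link flows f^{pq}_l, and it omits the intermediate-model bound constraints and the dual transformation that are the paper's contribution.

      \begin{aligned}
      \min \;& r \\
      \text{s.t. } & \textstyle\sum_{(p,q)} f^{pq}_l \le r\,c_l \quad \forall l\in E, \\
      & f^{pq} \text{ satisfies flow conservation, carrying } d_{pq} \text{ from } p \text{ to } q, \qquad f^{pq}_l \ge 0 .
      \end{aligned}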

  • Load-Balanced Non-split Shortest-Path-Based Routing with Hose Model and Its Demonstration

    Shunichi TSUNODA  Abu Hena Al MUKTADIR  Eiji OKI  

     
    PAPER-Internet

      Vol:
    E96-B No:5
      Page(s):
    1130-1140

    Smart OSPF (S-OSPF), a load-balancing shortest-path-based routing scheme, was introduced to improve the routing performance of networks running OSPF under the assumption that exact traffic demands are known. S-OSPF distributes traffic from a source node to its neighbor nodes; after reaching the neighbor nodes, the traffic is routed according to the OSPF protocol. In practice, however, exact traffic demands are difficult to obtain, and distributing unequal traffic to multiple neighbor nodes requires complex functionality at the source. This paper investigates, for the first time, non-split S-OSPF with the hose model, in which only the total amount of traffic that each node injects into the network and the total amount of traffic each node receives from the network are known, with the goal of minimizing the network congestion ratio (the maximum link utilization over all links). In non-split S-OSPF, traffic from a source node to a destination node is not split over multiple routes; it goes via only one neighbor node to the destination node. The routing decision with the hose model is formulated as an integer linear programming (ILP) problem. Since the ILP problem is difficult to solve in practical time, this paper proposes a heuristic algorithm. In the routing decision process, the proposed algorithm gives the highest priority to the node pair with the largest product of the total traffic injected by one node and the total traffic received by the other, where both volumes are specified by the hose model, and lets the source node select the neighbor node that minimizes the network congestion ratio for the worst-case traffic condition specified by the hose model. The network congestion ratios of the non-split S-OSPF scheme are compared with those of the split S-OSPF and classical shortest path routing (SPR) schemes. Numerical results show that non-split S-OSPF offers lower network congestion ratios than classical SPR and achieves congestion ratios comparable to split S-OSPF for larger networks. To validate the non-split S-OSPF scheme experimentally on a testbed network, we develop prototypes of the non-split S-OSPF path computation server and the non-split S-OSPF router, and demonstrate their functionality in a non-split S-OSPF network.
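
    The pair-ordering and neighbor-selection steps of the heuristic can be sketched as follows, assuming the hose-model totals are given as dictionaries and the worst-case congestion evaluation is supplied by the caller; the names and the evaluation callback are illustrative, not the paper's exact procedure.

      def order_pairs(out_traffic, in_traffic):
          # Highest priority to the pair with the largest product of the
          # source's injected traffic and the destination's received traffic.
          pairs = [(s, d) for s in out_traffic for d in in_traffic if s != d]
          return sorted(pairs,
                        key=lambda sd: out_traffic[sd[0]] * in_traffic[sd[1]],
                        reverse=True)

      def assign_neighbors(pairs, neighbors, worst_case_congestion):
          # For each pair, route via the single neighbor of the source that
          # minimizes the worst-case congestion ratio under the hose model.
          assignment = {}
          for s, d in pairs:
              assignment[(s, d)] = min(
                  neighbors[s],
                  key=lambda n: worst_case_congestion(assignment, (s, d), n))
          return assignment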

  • Multicast Routing Model to Minimize Number of Flow Entries in Software-Defined Network Open Access

    Seiki KOTACHI  Takehiro SATO  Ryoichi SHINKUMA  Eiji OKI  

     
    PAPER-Network

      Publicized:
    2020/11/13
      Vol:
    E104-B No:5
      Page(s):
    507-518

    A software-defined network (SDN) uses a centralized SDN controller to store flow entries in the flow table of each SDN switch; these entries control the packet flows in the switch. When a multicast service is provided in an SDN, the SDN controller stores a multicast entry dedicated to a multicast group in each SDN switch. Due to the limited capacity of each flow table, the number of flow entries required to set up a multicast tree must be suppressed. A conventional multicast routing scheme suppresses the number of multicast entries in one multicast tree by replacing some of them with unicast entries. However, since the conventional scheme determines a multicast tree for each request individually, unicast entries dedicated to the same receiver are scattered over various SDN switches when there are multiple multicast service requests. Further reduction in the number of flow entries is therefore still possible. In this paper, we propose a multicast routing model for multiple multicast requests that minimizes the number of flow entries. The model determines multiple multicast trees simultaneously so that a unicast entry dedicated to the same receiver and stored in the same SDN switch is shared among multicast trees. We formulate the proposed model as an integer linear programming (ILP) problem and develop a heuristic algorithm for cases where the ILP problem cannot be solved in practical time. Numerical results show that the proposed model reduces the required number of flow entries compared to two benchmark models; the maximum reduction ratio is 49.3% when the number of multicast requests is 40.
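
    The sharing effect the model exploits can be seen in a small counting sketch, assuming each tree is given as a map from switch to its entries; a unicast entry for the same receiver at the same switch is counted once across trees, while multicast entries remain per-group. The data layout here is illustrative only.

      def count_flow_entries(trees):
          # trees: list of {switch: [('multicast', group) or ('unicast', receiver), ...]}
          multicast_entries = set()
          unicast_entries = set()
          for group, tree in enumerate(trees):
              for switch, entries in tree.items():
                  for kind, target in entries:
                      if kind == 'multicast':
                          multicast_entries.add((switch, group))
                      else:
                          # Shared across trees when switch and receiver match.
                          unicast_entries.add((switch, target))
          return len(multicast_entries) + len(unicast_entries)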

  • Integrated Physical and Logical Layer Design of Multimedia ATM Networks

    Soumyo D. MOITRA  Eiji OKI  Naoaki YAMANAKA  

     
    LETTER-Communication Networks and Services

      Vol:
    E82-B No:9
      Page(s):
    1531-1540

    This letter proposes an integrated approach to multimedia ATM network design. An optimization model that combines the physical-layer design with the logical-layer design is developed. A key feature of the model is that the objective to be maximized is a profit function, and it incorporates comprehensive cost functions for the physical and logical layers. A simple heuristic algorithm to solve the model is presented; it should be useful in practice for network designers and operators. Some numerical examples are given to illustrate the application of the model and the algorithm.

  • Defragmentation with Reroutable Backup Paths in Toggled 1+1 Protection Elastic Optical Networks

    Takaaki SAWA  Fujun HE  Takehiro SATO  Bijoy Chand CHATTERJEE  Eiji OKI  

     
    PAPER-Network Management/Operation

      Publicized:
    2019/09/03
      Vol:
    E103-B No:3
      Page(s):
    211-223

    This paper proposes a defragmentation scheme using reroutable backup paths in toggled-based quasi 1+1 path protected elastic optical networks (EONs) to improve the efficiency of defragmentation and suppress the fragmentation effect. The proposed scheme can reallocate the spectrum slots of backup paths and reroute backup paths. The path-exchange function of the proposed scheme switches primary paths to the backup state while the backup paths become primary, which allows the advantages of defragmentation to be exploited on both primary and backup paths. We formulate a static spectrum reallocation problem with rerouting (SSRR) in the toggled-based quasi 1+1 path protected EON as an integer linear programming (ILP) problem and prove that the decision version of SSRR is NP-complete. A heuristic algorithm is introduced to solve the problem for large networks where the ILP problem is not tractable. For a dynamic traffic scenario, an approach that suppresses fragmentation by considering rerouting and path-exchanging operations is presented. We evaluate the performance of the proposed scheme by comparing it to the conventional scheme in terms of the dependency on node degree, the processing time of network operations, and the interval between scheduled defragmentations. The numerical results indicate that the proposed scheme increases traffic admissibility compared to the conventional scheme.

  • Scalable 3-Stage ATM Switch Architecture Using Optical WDM Grouped Links Based on Dynamic Bandwidth Sharing

    Kohei NAKAI  Eiji OKI  Naoaki YAMANAKA  

     
    PAPER-Packet and ATM Switching

      Vol:
    E82-C No:2
      Page(s):
    213-218

    This paper proposes a 3-stage ATM switch architecture that uses optical WDM (wavelength division multiplexing) grouped links and dynamic bandwidth sharing. The proposed architecture has two features. The first is the use of WDM technology, which makes the number of cables used in the system proportional to system size. The second is dynamic bandwidth sharing among WDM grouped links, which prevents the statistical multiplexing gain offered by WDM from falling even if the switching system becomes large. A performance evaluation confirms the scalability and cost-effectiveness of the proposed architecture: it is scalable in terms of the number of cables and the admissible load. We show how the appropriate wavelength signal speed can be determined to implement the switch in a cost-effective manner. The proposed architecture will therefore suit future high-speed multimedia ATM networks.

  • High-Speed Multi-Stage ATM Switch Based on Hierarchical Cell Resequencing Architecture and WDM Interconnection

    Seisho YASUKAWA  Naoaki YAMANAKA  Eiji OKI  Ryusuke KAWANO  

     
    PAPER-Packet and ATM Switching

      Vol:
    E82-C No:2
      Page(s):
    219-228

    This paper proposes a non-blocking multi-stage ATM switch based on a hierarchical-cell-resequencing (HCR) mechanism and high-speed WDM interconnection, and reports on its feasibility study. In a multi-stage ATM switch, cell-based routing is effective in making the switch non-blocking, because all traffic is randomly distributed over the intermediate switching stages. However, due to the multi-path conditions, cells may arrive out of sequence at the output of the switching fabric, so resequencing must be performed either at each output of the final switching stage or at the output of each switching stage. The basic HCR switch performs cell resequencing in a hierarchical manner when switching cells from its input lines to an output line; as a result, the cell sequence at each output of the basic HCR switch is recovered. A multi-stage HCR switch is constructed by interconnecting the input and output lines of these basic HCR switches in a hierarchical manner, so the cell sequence at each final output of the switching fabric is preserved hierarchically. In this way, cell-based routing becomes possible and a multi-stage ATM switch with the HCR mechanism can achieve 100% throughput without any internal speed-up techniques. Because a large-capacity multi-stage HCR switch needs a huge number of high-speed signal interconnections, a breakthrough in compact optical interconnection technology is required. This paper therefore also proposes a WDM interconnection system with an optical router arrayed waveguide filter (AWGF) that interconnects high-speed switch elements effectively, and reports a feasibility study. In this architecture, each switch element is addressed by a unique wavelength, so a switch in a previous stage can transmit a cell to any switch in the next stage simply by selecting its cell transmission wavelength. To make this system feasible, we developed a wide-channel-spacing optical router AWGF and compact 10-Gbit/s optical transmitter and receiver modules with a compact high-power electroabsorption distributed feedback (EA-DFB) laser and a new bit-decision circuit. Using these modules, we confirmed stable operation of the WDM interconnection. This switch architecture and WDM interconnection system should enable the development of high-speed ATM switching systems with throughput of over 1 Tbit/s.

  • A Dynamic Reference Single-Ended ECL Input Interface Circuit for MCM-Based 80-Gbps ATM Switch

    Ryusuke KAWANO  Naoaki YAMANAKA  Eiji OKI  Tomoaki KAWAMURA  

     
    PAPER-Silicon Devices

      Vol:
    E82-C No:3
      Page(s):
    519-525

    A high-speed dynamic-reference single-ended ECL input-interface circuit has been fabricated for advanced ATM switching MCMs. To raise the limit on the number of I/O pins, this circuit operates with a reference signal generated directly from the input signal itself. The reference level is changed dynamically to achieve a larger noise margin in operation. Experimental results show that operation up to 3.4 Gbps with a large level margin can be attained. We deploy this circuit in the input-interface LSIs of an 80-Gbps ATM switching MCM.

  • Preventive Start-Time Optimization Considering Both Failure and Non-Failure Scenarios

    Stephane KAPTCHOUANG  Ihsen AZIZ OUÉDRAOGO  Eiji OKI  

     
    PAPER-Internet

      Publicized:
    2017/01/06
      Vol:
    E100-B No:7
      Page(s):
    1124-1132

    This paper proposes Preventive Start-time Optimization with No Penalty (PSO-NP). PSO-NP determines, at network operation start time, a set of Open Shortest Path First (OSPF) link weights that can handle any link-failure scenario preventively while considering both failure and non-failure scenarios. Preventive Start-time Optimization (PSO) was designed to minimize the worst-case congestion ratio (the maximum link utilization over all links in the network) in case of link failure. PSO considers all failure patterns to determine a link weight set that counters the worst-case failure. Unfortunately, when there is no link failure, that link weight set leads to a higher congestion ratio than that of the conventional start-time optimization scheme. This penalty is perpetual and thus a burden, especially in networks with few failures. In this work, we suppress that penalty while reducing the worst-case congestion ratio by considering both failure and non-failure scenarios; PSO-NP is simple and effective in that regard. We further expand PSO-NP into Generalized Preventive Start-time Optimization (GPSO), which finds a link weight set that balances the penalty under no failure against the congestion ratio under the worst-case failure. Simulation results show that PSO-NP achieves substantial congestion reduction for any failure case while suppressing the penalty when there is no failure in the network. In addition, GPSO, as a framework, is effective in determining a suitable link weight set that considers the trade-off between the penalty under no failure and the reduction of the worst-case congestion ratio.
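
    One plausible way to read the balance GPSO strikes is as a weighted objective over a candidate link-weight set, evaluated with and without failures; the congestion_ratio callback and the weighting factor beta are assumptions for illustration, not the paper's exact formulation.

      def gpso_objective(weights, links, congestion_ratio, beta=0.5):
          # Penalty under no failure vs. worst congestion over all single-link failures.
          no_failure = congestion_ratio(weights, failed_link=None)
          worst_failure = max(congestion_ratio(weights, failed_link=l) for l in links)
          return beta * no_failure + (1.0 - beta) * worst_failure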

  • Delay Distribution Based Remote Data Fetch Scheme for Hadoop Clusters in Public Cloud

    Ravindra Sandaruwan RANAWEERA  Eiji OKI  Nattapong KITSUWAN  

     
    PAPER-Network

      Publicized:
    2019/02/04
      Vol:
    E102-B No:8
      Page(s):
    1617-1625

    Apache Hadoop and its ecosystem have become the de facto platform for processing large-scale data, or Big Data, because they hide the complexity of distributed computing, scheduling, and communication while providing fault tolerance. Cloud-based environments are becoming a popular platform for hosting Hadoop clusters due to their low initial cost and virtually limitless capacity. However, cloud-based Hadoop clusters bring their own challenges because of contradictory design principles: Hadoop is designed on the shared-nothing principle, while the cloud is based on consolidation and resource sharing. Most of Hadoop's features are designed for on-premises data centers where the cluster topology is known. Hadoop depends on the rack assignment of servers (configured by the cluster administrator) to calculate the distance between servers, which it uses to find the best remote server from which to fetch non-local data. However, public cloud providers do not share the rack information of virtual servers with their tenants, and without it Hadoop may fetch data from a remote server on the other side of the data center. To overcome this problem, we propose a delay-distribution-based scheme to find the closest server from which to fetch non-local data in public cloud-based Hadoop clusters. The proposed scheme bases server selection on the delay distributions between server pairs; each delay distribution is obtained by periodically measuring the round-trip time between servers. Our experiments show that the proposed scheme outperforms conventional Hadoop by nearly 12% in terms of non-local data fetch time. This reduction in data fetch time will lead to a reduction in job run time, especially in real-world multi-user clusters where non-local data fetching can happen frequently.
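
    A minimal sketch of distribution-based server selection, assuming a sliding window of round-trip-time samples per server and the median as the summary statistic; the window size and the statistic are illustrative assumptions, and the paper defines its own distribution-based criterion.

      import statistics

      def record_rtt(history, server, rtt_ms, window=100):
          # Keep a sliding window of periodic RTT measurements per server.
          samples = history.setdefault(server, [])
          samples.append(rtt_ms)
          if len(samples) > window:
              samples.pop(0)

      def pick_remote_server(history, candidates):
          # Fetch the non-local block from the candidate with the lowest median RTT.
          return min(candidates,
                     key=lambda s: statistics.median(history.get(s, [float('inf')])))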

  • Experimental 5-Tb/s Packet-by-Packet Wavelength Switching System Using 2.5-Gb/s 8-λ WDM Links

    Kimihiro YAMAKOSHI  Nobuaki MATSUURA  Kohei NAKAI  Eiji OKI  Naoaki YAMANAKA  Takaharu OHYAMA  Yuji AKAHORI  

     
    PAPER-Switching

      Vol:
    E85-B No:7
      Page(s):
    1293-1301

    We have developed an experimental 5-Tb/s packet-by-packet wavelength switching system, OPTIMA-2. This paper describes its hardware architecture. OPTIMA-2 is a non-blocking 3-stage switch using optical wavelength-division-multiplexing (WDM) links and dynamic bandwidth sharing. A new scheduling algorithm for variable-length packets is used at the receiver ports of the WDM links, and simulation results show that it suppresses short-packet delay while keeping throughput high. An implementation of the WDM link using field-programmable gate arrays and a compact planar lightwave circuit platform is described. Experimental results for the basic operation of optical wavelength switching are also presented.

  • A Pipelined Maximal-Sized Matching Scheme for High-Speed Input-Buffered Switches

    Eiji OKI  Roberto ROJAS-CESSA  H. Jonathan CHAO  

     
    PAPER-Switching

      Vol:
    E85-B No:7
      Page(s):
    1302-1311

    This paper proposes an innovative pipeline-based maximal-sized matching scheduling approach, called PMM, for input-buffered switches. It relaxes the requirement that a maximal matching be completed within a single time slot, allowing it to span any number of time slots. In the PMM approach, arbitration is operated in a pipelined manner using K subschedulers; each subscheduler is allowed to take more than one time slot for its matching, and every time slot one of the subschedulers provides a matching result. We adopt an extended version of Dual Round-Robin Matching (DRRM), called iterative DRRM (iDRRM), as the maximal matching algorithm in each subscheduler. PMM maximizes the efficiency of the adopted arbitration scheme by allowing sufficient time for the required number of iterations. We show that PMM preserves the 100% throughput under uniform traffic and the fairness for best-effort traffic of the adopted non-pipelined algorithm, while ensuring that cells from the same virtual output queue (VOQ) are transmitted in sequence. In addition, we confirm that the delay performance of PMM is not significantly degraded by increasing the pipeline degree, i.e., the number of subschedulers, when the number of outstanding requests from a VOQ to each subscheduler is limited to one.
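
    The pipelining itself can be sketched independently of the matching algorithm: K subschedulers are started in successive slots, each is given K slots to finish, and one matching result becomes available per slot once the pipeline is full. This is a simplified sketch; the matching routine (iDRRM in the paper) is abstracted as a callback and the request snapshotting is idealized.

      def pipelined_matching(requests_per_slot, k, run_matching):
          results = []
          in_flight = []  # (finish_slot, request_snapshot), one entry per active subscheduler
          for t, requests in enumerate(requests_per_slot):
              in_flight.append((t + k, requests))      # start a subscheduler this slot
              if in_flight and in_flight[0][0] == t:   # the oldest subscheduler finishes
                  _, snapshot = in_flight.pop(0)
                  results.append(run_matching(snapshot))
          return results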

  • A High-Speed ATM Switch Based on Scalable Distributed Arbitration

    Eiji OKI  Naoaki YAMANAKA  

     
    LETTER-Switching and Communication Processing

      Vol:
    E80-B No:9
      Page(s):
    1372-1376

    This paper proposes a high-speed crosspoint-buffer-type ATM switch, named the Scalable-Distributed-Arbitration (SDA) switch. The SDA switch employs a new arbitration scheme that allows the switch to be scalable. It has a crosspoint buffer and a transit buffer at every crosspoint, and arbitration is executed between the crosspoint buffer and the transit buffer: a cell is selected based on its delay time using a synchronous counter, and the selected cell is transferred from a crosspoint buffer to the output port by way of several transit buffers. Since arbitration is executed in a distributed manner at each crosspoint and the arbitration time does not depend on the switch size, the SDA switch can be expanded to realize large throughput. Numerical results show that the SDA switch ensures fairness in terms of delay time. In addition, the maximum delay time and the required crosspoint buffer size of the SDA switch are reduced compared with those of the conventional switch based on ring arbitration. The proposed SDA switch thus has a simple and expandable architecture and will be suitable for future high-speed multimedia ATM networks.
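
    The delay-based selection at one crosspoint can be sketched as follows, assuming each head-of-line cell carries the slot count at which it arrived; the cell and counter representation is illustrative only.

      def arbitrate(crosspoint_head, transit_head, current_slot):
          # Pick the head-of-line cell that has waited longer (larger delay time),
          # comparing the crosspoint buffer against the transit buffer.
          candidates = [c for c in (crosspoint_head, transit_head) if c is not None]
          if not candidates:
              return None
          return max(candidates, key=lambda c: current_slot - c['arrival_slot'])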

  • Heuristic Approach to Distributed Server Allocation with Preventive Start-Time Optimization against Server Failure

    Souhei YANASE  Shuto MASUDA  Fujun HE  Akio KAWABATA  Eiji OKI  

     
    PAPER-Network

      Publicized:
    2021/02/01
      Vol:
    E104-B No:8
      Page(s):
    942-950

    This paper presents a distributed server allocation model with preventive start-time optimization against a single server failure. The presented model preventively determines the assignment of servers to users under each failure pattern so as to minimize the largest maximum delay among all failure patterns. We formulate the proposed model as an integer linear programming (ILP) problem and prove the NP-completeness of the considered problem. As the numbers of users and servers increase, the size of the ILP problem grows and the computation time to solve it becomes excessively large. We therefore develop a heuristic approach that applies simulated annealing and the ILP approach in a hybrid manner to obtain a solution. Numerical results reveal that the developed heuristic approach reduces the computation time by 26% compared to the ILP approach while increasing the largest maximum delay by just 3.4% on average. It reduces the largest maximum delay compared with the start-time optimization model, and it avoids the instability caused by the unnecessary disconnections permitted by the run-time optimization model.
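
    A generic simulated-annealing loop of the kind the hybrid heuristic combines with ILP solving is sketched below; the cost callback (here, the largest maximum delay over all single-server-failure patterns) and the neighbor move are placeholders, and the cooling parameters are arbitrary.

      import math, random

      def simulated_annealing(initial, neighbor, cost, t0=1.0, alpha=0.95, iterations=1000):
          current, current_cost = initial, cost(initial)
          best, best_cost = current, current_cost
          temperature = t0
          for _ in range(iterations):
              candidate = neighbor(current)
              candidate_cost = cost(candidate)
              delta = candidate_cost - current_cost
              # Accept improvements always; accept worse moves with Boltzmann probability.
              if delta <= 0 or random.random() < math.exp(-delta / temperature):
                  current, current_cost = candidate, candidate_cost
                  if current_cost < best_cost:
                      best, best_cost = current, current_cost
              temperature *= alpha
          return best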

  • Network Optimization for Energy Saving Considering Link Failure with Uncertain Traffic Conditions

    Ravindra Sandaruwan RANAWEERA  Ihsen Aziz OUÉDRAOGO  Eiji OKI  

     
    PAPER-Network

      Vol:
    E97-B No:12
      Page(s):
    2729-2738

    The energy consumption of the Internet has a huge impact on the world economy and is likely to increase every year. In present backbone networks, pairs of nodes are connected by “bundles” of multiple physical cables that form one logical link, and energy savings can be achieved by shutting down unused network resources. The hose model can support traffic-demand variations among node pairs in different time periods because it accommodates multiple traffic matrices, unlike the pipe model, which supports only one traffic matrix. This paper proposes an OSPF (Open Shortest Path First) link-weight optimization scheme to reduce the network resources used under the hose model while considering single link failures. The proposed scheme employs a heuristic algorithm based on simulated annealing to determine a suitable set of link weights that reduces the worst-case total network resources used, considering any single link failure preventively. It efficiently selects the worst-case-performance link-failure topology and searches for a link weight set that reduces the worst-case total network resources used. Numerical results show that the proposed scheme reduces the worst-case total network resources used more effectively than the conventional schemes, start-time optimization and minimum-hop routing.

  • Scalable Active Optical Access Network Using Variable High-Speed PLZT Optical Switch/Splitter

    Kunitaka ASHIZAWA  Takehiro SATO  Kazumasa TOKUHASHI  Daisuke ISHII  Satoru OKAMOTO  Naoaki YAMANAKA  Eiji OKI  

     
    PAPER

      Vol:
    E95-B No:3
      Page(s):
    730-739

    This paper proposes a scalable active optical access network using a high-speed Plumbum Lanthanum Zirconate Titanate (PLZT) optical switch/splitter. The active optical network using PLZT switching technology, called ActiON, has been presented to increase the number of subscribers and the maximum transmission distance compared to the Passive Optical Network (PON). ActiON supports multicast slot allocation by running the PLZT switch elements in the splitter mode, which forces a switch to behave as an optical splitter. However, the previous ActiON creates a trade-off between network scalability and the power loss experienced by the optical signal to each user: it does not use the optical power efficiently because the power is simply split 0.5:0.5 without considering the transmission distance from the OLT to each ONU. The proposed network adopts PLZT switch elements in a variable splitter mode, which controls the split ratio of the optical power according to the transmission distance from the OLT to each ONU, in addition to the two existing modes, the switching mode and the splitter mode. The proposed network introduces flexible multicast slot allocation according to the transmission distance from the OLT to each user and the number of required users using the three modes, while keeping the advantages of ActiON, namely scalable and secure access services. Numerical results show that the proposed network dramatically reduces the required number of slots, supports high-bandwidth-efficiency services, and extends the coverage of the access network compared to the previous ActiON, and that the required computation time for selecting multicast users is less than 30 msec, which is acceptable for on-demand broadcast services.

Results 61-80 of 86